Results 1 - 20 of 130
1.
2023 9th International Conference on eDemocracy and eGovernment, ICEDEG 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20244243

ABSTRACT

Messaging platforms like WhatsApp are among the largest contributors to the spread of Covid-19 health misinformation, but they also play a critical role in disseminating credible information and reaching populations at scale. This study explores the relationships between verification behaviours and the intention to share information among users who report high trust in their personal network and users who report high trust in authoritative sources. The study was conducted as a survey delivered through WhatsApp to users of the WHO HealthAlert chatbot service. A theoretical model adapted from news verification behaviours was used to determine the correlations between the constructs. An excellent response rate yielded 5477 usable responses, allowing the adapted research model to be tested by means of a Structural Equation Model (SEM) using the partial least squares algorithm in SmartPLS4. The findings show significant correlations between the constructs and suggest that participants who reported high levels of trust in authoritative sources are less likely to share information because of their increased information verification behaviours. © 2023 IEEE.

2.
Proceedings - 2022 13th International Congress on Advanced Applied Informatics Winter, IIAI-AAI-Winter 2022 ; : 181-188, 2022.
Article in English | Scopus | ID: covidwho-20243412

ABSTRACT

On social media, misinformation can spread quickly, posing serious problems. Understanding the content and sensitive nature of fake news and misinformation is critical to preventing the damage they cause. To this end, the characteristics of the information must first be discerned. In this paper, we propose a transformer-based hybrid ensemble model to detect misinformation on the Internet. First, false and true news on Covid-19 were analyzed, and various text classification tasks were performed to understand their content. The results were utilized in the proposed hybrid ensemble learning model. Our analysis revealed promising results, establishing the capability of the proposed system to detect misinformation on social media. The final model exhibited an excellent F1 score (0.98) and accuracy (0.97). The AUC (Area Under the Curve) score was also high at 0.98, and the ROC (Receiver Operating Characteristic) curve revealed that the true-positive rate of the data was close to one in this model. Thus, the proposed hybrid model was demonstrated to be successful in recognizing false information online. © 2022 IEEE.
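The abstract reports only the ensemble's headline metrics, not the combination scheme itself. As a hedged illustration of the hard-voting idea behind such hybrid ensembles, a minimal sketch might look like the following (the base "models" here are toy keyword heuristics standing in for the paper's transformer classifiers, and all names are invented for the example):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model labels for one sample by majority vote.

    `predictions` is a list of labels, one per base model
    (e.g. ["fake", "real", "fake"]). Ties are broken in favour
    of the label that appears first in the list.
    """
    counts = Counter(predictions)
    top = counts.most_common(1)[0][1]
    for label in predictions:  # first-seen tie-break
        if counts[label] == top:
            return label

def ensemble_predict(models, sample):
    """Hard-voting ensemble over callable base models."""
    return majority_vote([m(sample) for m in models])

# Three toy "models": keyword heuristics, not real transformers.
m1 = lambda t: "fake" if "miracle" in t else "real"
m2 = lambda t: "fake" if "cure" in t else "real"
m3 = lambda t: "real"

print(ensemble_predict([m1, m2, m3], "miracle cure for covid"))  # fake
print(ensemble_predict([m1, m2, m3], "who releases guidance"))   # real
```

A soft-voting variant would average the predicted class probabilities of the base models instead of counting discrete labels, which is often preferable when the underlying classifiers expose calibrated scores.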

3.
ACM Web Conference 2023 - Companion of the World Wide Web Conference, WWW 2023 ; : 688-693, 2023.
Article in English | Scopus | ID: covidwho-20241249

ABSTRACT

Online misinformation has become a major concern in recent years, and it has been further emphasized during the COVID-19 pandemic. Social media platforms, such as Twitter, can be serious vectors of misinformation online. In order to better understand the spread of such fake news, lies, deceptions, and rumours, we analyze the correlations between the following textual features in tweets: emotion, sentiment, political bias, stance, veracity, and conspiracy theories. We train several transformer-based classifiers on multiple datasets to detect these textual features and identify potential correlations using conditional distributions of the labels. Our results show that the online discourse regarding some topics, such as COVID-19 regulations or conspiracy theories, is highly controversial and reflects the actual U.S. political landscape. © 2023 ACM.

4.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) ; 13741 LNCS:466-479, 2023.
Article in English | Scopus | ID: covidwho-20240136

ABSTRACT

Online news and information sources are convenient and accessible ways to learn about current issues. For instance, more than 300 million people engage with posts on Twitter globally, which also creates opportunities to disseminate misleading information. There are numerous cases where violent crimes have been committed because of fake news. This research presents the CovidMis20 dataset (COVID-19 Misinformation 2020 dataset), which consists of 1,375,592 tweets collected from February to July 2020. CovidMis20 can be automatically updated to fetch the latest news and is publicly available at: https://github.com/everythingguy/CovidMis20. This research was conducted using Bi-LSTM deep learning and an ensemble CNN+Bi-GRU for fake news detection. The results showed that, with testing accuracies of 92.23% and 90.56%, respectively, the ensemble CNN+Bi-GRU model consistently provided higher accuracy than the Bi-LSTM model. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

5.
21st IEEE International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC 2022 ; : 214-220, 2022.
Article in English | Scopus | ID: covidwho-2321950

ABSTRACT

Social media has become a source of information for many people because of its freedom of use. As a result, fake news spreads quickly and easily, regardless of its credibility, especially over the past decade. The vast amount of information being shared is accompanied by fraudulent practices that negatively affect readers' cognitive abilities and mental health. In this study, we introduce a new Arabic dataset of fake news related to COVID-19 collected from Twitter and Facebook. We then applied two pre-trained classification models, AraBERT and BERT base Arabic. The AraBERT models obtained better accuracy than BERT base Arabic on the two datasets. © 2022 IEEE.

6.
Proceedings of the ACM on Human-Computer Interaction ; 7(CSCW1), 2023.
Article in English | Scopus | ID: covidwho-2320340

ABSTRACT

While COVID-19 text misinformation has already been investigated by various scholars, fewer research efforts have been devoted to characterizing and understanding COVID-19 misinformation that is carried out through visuals like photographs and memes. In this paper, we present a mixed-method analysis of image-based COVID-19 misinformation on Twitter in 2020. We deploy a computational pipeline to identify COVID-19 related tweets, download the images contained in them, and group together visually similar images. We then develop a codebook to characterize COVID-19 misinformation and manually label images as misinformation or not. Finally, we perform a quantitative analysis of tweets containing COVID-19 misinformation images. We identify five types of COVID-19 misinformation, from a wrong understanding of the threat severity of COVID-19 to the promotion of fake cures and conspiracy theories. We also find that tweets containing COVID-19 misinformation images do not receive more interactions than baseline tweets with random images posted by the same set of users. As for temporal properties, COVID-19 misinformation images are shared for longer periods of time than non-misinformation ones and also have longer burst times (note that this analysis compares against non-misinformation images rather than random images, so it is not a direct comparison with the interaction analysis). When looking at the users sharing COVID-19 misinformation images on Twitter from the perspective of their political leanings, we find that pro-Democrat and pro-Republican users share a similar amount of tweets containing misleading or false COVID-19 images. However, the types of images that they share are different: while pro-Democrat users focus on misleading claims about the Trump administration's response to the pandemic, as well as often sharing manipulated images intended as satire, pro-Republican users often promote hydroxychloroquine, an ineffective medicine against COVID-19, as well as conspiracy theories about the origin of the virus. Our analysis sets a basis for better understanding COVID-19 misinformation images on social media and the nuances of effectively moderating them. © 2023 ACM.

7.
Computers, Materials and Continua ; 75(2):4255-4272, 2023.
Article in English | Scopus | ID: covidwho-2312440

ABSTRACT

Nowadays, the usage of social media platforms is rapidly increasing, and rumours and false information are also rising, especially among Arab nations. This false information is harmful to society and individuals, so blocking and detecting the spread of fake news in Arabic becomes critical. Several artificial intelligence (AI) methods, including contemporary transformer techniques such as BERT, have been used to detect fake news; here, fake news in Arabic is likewise identified by utilizing AI approaches. This article develops a new hunter-prey optimization with hybrid deep learning-based fake news detection (HPOHDL-FND) model on an Arabic corpus. The HPOHDL-FND technique applies extensive data pre-processing steps to transform the input data into a useful format. Besides, the HPOHDL-FND technique utilizes a long short-term memory recurrent neural network (LSTM-RNN) model for fake news detection and classification. Finally, the hunter-prey optimization (HPO) algorithm is exploited for optimal tuning of the hyperparameters of the LSTM-RNN model. The performance of the HPOHDL-FND technique is validated using two Arabic datasets. The outcomes showed better performance than other existing techniques, with maximum accuracies of 96.57% and 93.53% on the Covid19Fakes and satirical datasets, respectively. © 2023 Tech Science Press. All rights reserved.

8.
7th International Symposium on Intelligent Informatics, ISI 2022 ; 333:471-486, 2023.
Article in English | Scopus | ID: covidwho-2291210

ABSTRACT

The Covid-19 pandemic has increased global dependency on the internet. Millions of individuals use social networking sites not only to share information but also their personal opinions. These facts and opinions are frequently unconfirmed, which results in the spread of incorrect information, generally referred to as "Fake Content". The most challenging aspect of social media is determining the source of information: it is difficult to figure out who generated fake news once it has gone viral. Most available computational models have a key flaw in that they rely on the presence of inaccurate information to generate meaningful features, making disinformation mitigation measures difficult to predict. This paper presents a parallel approach to false information mitigation drawn from the field of epidemiology, using SIR (Susceptible, Infected, Recovered) to model the impact of fake data dissemination during Covid-19. The SIR simulation is done in NetLogo, in which the population is made up of two kinds of agents: fake news believers and non-believers. To validate our work, the concept of trust, a fundamental component of any fake news interaction, is also discussed; the level of trust can be expressed by assigning each node a pair of trust scores. We ran our experiments with three common evaluation metrics: accuracy, precision, and recall. The hybrid model shows accuracy improvements of 81.4%, 77.1%, and 91.8% for the respective networks. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
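The paper runs its SIR simulation in NetLogo with agent-level trust scores. As a rough stand-in, the same Susceptible/Infected/Recovered dynamics for fake-news believers can be sketched at the population level in a few lines (the parameter values below are illustrative, not taken from the paper):

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete time step of the SIR rumour model.

    s, i, r: population fractions that are Susceptible (can still be
    convinced), Infected (believe and spread the fake news), and
    Recovered (have stopped believing/sharing).
    beta: transmission rate per contact; gamma: recovery rate.
    """
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(beta=0.4, gamma=0.1, i0=0.01, steps=200):
    """Run the model and return the final state plus the believer peak."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = simulate()
print(f"final recovered fraction: {r:.2f}, peak believer fraction: {peak:.2f}")
```

With beta > gamma the rumour takes off and a large fraction of the population passes through the believer state; with beta < gamma it dies out. Shifting that threshold is the kind of effect the paper's per-node trust scores are intended to capture.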

9.
2023 International Conference on Intelligent Systems, Advanced Computing and Communication, ISACC 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2305549

ABSTRACT

With the advancement of technology, web technology in the form of social media is one of the main origins of information worldwide. It has helped people enhance their ability to know, learn, and gain knowledge about things around them, and the benefits that technological advancement offers are boundless. However, social media also has major issues concerning filtering the right information from the wrong. The sources of information become highly unreliable at times, and it is difficult to differentiate and decipher real news or real information from fake. Cybercrime, through fraud mechanisms, is a pervasive menace permeating media technology every single day. Hence, this article reports an attempt at fake news detection in Khasi social media data. To execute this work, the data analyzed were extracted from different Internet platforms, mainly from social media articles and posts. The dataset consists of fake news and real news based on COVID-19, as well as other forms of wrong information disseminated throughout the pandemic period. We manually annotated the assembled Khasi news; the dataset consists of 116 news items. We used three machine learning techniques in our experiment: Decision Tree, Logistic Regression, and Random Forest. We observed in the experimental results that the Decision Tree-based approach yielded the most accurate results, with an accuracy of 87%, whereas Logistic Regression yielded an accuracy of 82% and Random Forest an accuracy of 75%. © 2023 IEEE.

10.
38th International Conference on Computers and Their Applications, CATA 2023 ; 91:124-137, 2023.
Article in English | Scopus | ID: covidwho-2304334

ABSTRACT

On social media, false information can proliferate quickly and cause big issues. To minimize the harm caused by false information, it is essential to comprehend its sensitive nature and content, and to do so it is necessary to first identify the characteristics of the information. To identify false information on the internet, we suggest an ensemble model based on transformers in this paper. First, various text classification tasks were carried out to understand the content of false and true news on Covid-19; the proposed hybrid ensemble learning model then used these results. The results of our analysis were encouraging, demonstrating that the suggested system can identify false information on social media. All the classification tasks were validated and showed outstanding results. The final model showed excellent accuracy (0.99) and F1 score (0.99). The Receiver Operating Characteristic (ROC) curve showed that the true-positive rate of the data in this model was close to one, and the AUC (Area Under the Curve) score was also very high at 0.99. Thus, the suggested model was shown to be effective at identifying false information online. © 2023, EasyChair. All rights reserved.

11.
2023 International Conference on Advances in Intelligent Computing and Applications, AICAPS 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2304118

ABSTRACT

Social networks have had a significant impact on people's personal and professional lives all around the world. Since the COVID-19 pandemic boosted the use of digital media, fake news and reviews have had a stronger impact on society in recent years. This study demonstrates how the stiffness index may be used to model the spread of fake news in Indian states. We demonstrate that the speed at which fake news circulates through online social networks increases with the stiffness index. We conducted a stiffness analysis for all Indian states to assess the spread of fake information in each state. The stiffness analysis of the conventional SIR model, one of the widely used approaches to describing the propagation of rumors in social networks, serves as an explanation and illustration of our proposition. The rise of fake news in our society is also evidenced by a comparison of the stiffness index for India before and after the COVID-19 outbreak. The study provides governments and policymakers with a more comprehensive understanding of the value of early intervention to combat the spread of false information via digital media. © 2023 IEEE.

12.
2nd International Conference on Electronics and Renewable Systems, ICEARS 2023 ; : 961-967, 2023.
Article in English | Scopus | ID: covidwho-2303023

ABSTRACT

With cyberspace's continuous evolution, online reviews play a crucial role in determining business success in various sectors, ranging from restaurants and hotels to e-commerce applications. Typically, a favorable review for a specific product draws in more consumers and results in a significant boost in sales. Unfortunately, a few businesses are using deceptive methods to improve their online reputation by using fake reviews of competitors. As a result, detecting fake reviews has become a difficult and ever-changing research field. Verbal characteristics extracted from review text, as well as nonverbal features such as the reviewer's engagement metrics, the IP address of the device, and so on, play an important role in detecting fake reviews. This article examines and compares various machine learning techniques for detecting deceptive reviews on various online platforms such as e-commerce websites such as Amazon and online review websites such as Yelp, among others. © 2023 IEEE.

13.
2023 International Conference on Computing, Networking and Communications, ICNC 2023 ; : 463-467, 2023.
Article in English | Scopus | ID: covidwho-2298957

ABSTRACT

The COVID-19 pandemic has been affecting people's everyday lives for more than two years. With the fast spread of online communication and social media platforms, the amount of fake news related to COVID-19 is growing rapidly and propagating misleading information to the public. To tackle this challenge and stop the spread of fake news, this project proposes to build an online software detector specifically for COVID-19 news that classifies whether the news is trustworthy. Specifically, as it is difficult to train a generic model for all domains, a base model is developed and fine-tuned to the specific domain context. In addition, a data collection mechanism is developed to get the latest COVID-19 news data and keep the model fresh. We then conducted performance comparisons among different models using traditional machine learning techniques, ensemble machine learning, and state-of-the-art deep learning mechanisms. The most effective model is deployed to our online website for COVID-19 related fake news detection. © 2023 IEEE.

14.
1st International Conference in Advanced Innovation on Smart City, ICAISC 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2297802

ABSTRACT

Since its emergence in December 2019, numerous news items about the COVID-19 pandemic have been shared on social media, containing information from both reliable and unreliable medical sources. News and misleading information spread quickly on social media, which can lead to anxiety, unwanted exposure to medical remedies, and other harms; rapid detection of fake news can reduce its spread. In this paper, we aim to create an intelligent system to detect misleading information about COVID-19 using deep learning techniques based on LSTM and BLSTM architectures. The data used to construct the DL models are text and need to be transformed into numbers; we therefore test the efficiency of three vectorization techniques: bag of words, Word2Vec, and BERT. The experimental study showed that the best performance was obtained by the LSTM model with BERT, achieving an accuracy of 91% on the test set. © 2023 IEEE.
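Of the three vectorization techniques compared, bag of words is the simplest. A stdlib-only sketch of it follows (Word2Vec and BERT require trained embedding models and are not reproduced here; the toy corpus is invented for the example):

```python
from collections import Counter

def build_vocab(corpus):
    """Map each distinct lowercased token to a fixed column index."""
    vocab = {}
    for doc in corpus:
        for tok in doc.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def bow_vector(doc, vocab):
    """Term-count vector for one document over the shared vocabulary."""
    counts = Counter(doc.lower().split())
    return [counts.get(tok, 0) for tok in vocab]

corpus = ["Masks stop the virus", "the virus is a hoax", "masks work"]
vocab = build_vocab(corpus)
print(bow_vector("the virus the hoax", vocab))  # → [0, 0, 2, 1, 0, 0, 1, 0]
```

Each document becomes a fixed-length count vector, which is exactly the numeric form an LSTM-free baseline classifier can consume; Word2Vec and BERT instead map tokens to dense learned embeddings.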

15.
20th IEEE International Symposium on Parallel and Distributed Processing with Applications, 12th IEEE International Conference on Big Data and Cloud Computing, 12th IEEE International Conference on Sustainable Computing and Communications and 15th IEEE International Conference on Social Computing and Networking, ISPA/BDCloud/SocialCom/SustainCom 2022 ; : 426-434, 2022.
Article in English | Scopus | ID: covidwho-2294233

ABSTRACT

False claims or fake news related to the health care or medicine field on social media have garnered increasing amounts of interest, especially in the aftermath of the COVID-19 pandemic. False claims about the pandemic which spread on social media have contributed to vaccine hesitancy and a lack of trust in the advice of medical professionals. If not detected and disproved early, such claims can complicate future pandemic responses. We focus on false claims in the field of Neurodevelopmental Disorders (NDDs), an umbrella term for a group of disorders that includes Autism, ADHD, Cerebral Palsy, etc. In this paper, we present our approach to automated systems for fact-checking medical articles related to NDDs. We also present an annotated dataset of 116 web pages which we use to test our model, and we report our results. © 2022 IEEE.

16.
2022 International Congress of Trends in Educational Innovation, CITIE 2022 ; 3353:118-126, 2023.
Article in English | Scopus | ID: covidwho-2272055

ABSTRACT

The use of social media, low literacy, fast information sharing, and preprint services are identified as the main causes of the infodemic [4]; among its consequences, it can promote public health risk behaviors globally. Fake news represents a threat to societies in the context of the pandemic. The aim of this article is to review existing research on fake news from the last two years, discussing the characteristics of infodemics, media/digital literacy and its impact on society, and highlighting mechanisms to detect and curb fake news on covid-19 in social networks. Thirty articles were analyzed, selected from 1354 open access articles on this subject. The conclusion was that the harmful effects of fake news on society must be taken note of, considering the informational contexts (epistemic, normative and emotional) together with media literacy, in order to increase trust and emphasize public health messages with emotionally relevant and scientifically based content, and to continue conducting research towards a 100% effective recognition and elimination of untruthful information on social networks. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

17.
2022 IEEE International Conference on Big Data, Big Data 2022 ; : 2305-2308, 2022.
Article in English | Scopus | ID: covidwho-2268291

ABSTRACT

Classifying whether collected information related to emerging topics and domains is fake/incorrect is not an easy task because we do not have enough labeled data in the domains. Given labeled data from source domains (e.g., gossip and health) and limited labeled data from a newly emerging target domain (e.g., COVID-19 and Ukraine war), simply applying knowledge learned from source domains to the target domain may not work well because of different data distribution. To solve the problem, in this paper, we propose an energy-based domain adaptation with active learning for early misinformation detection. Given three real world news datasets, we evaluate our proposed model against two baselines in both domain adaptation and the whole pipeline. Our model outperforms the baselines, improving at least 5% in the domain adaptation task and 10% in the whole pipeline, showing effectiveness of our proposed approach. © 2022 IEEE.

18.
2022 International Conference on Data Analytics for Business and Industry, ICDABI 2022 ; : 509-513, 2022.
Article in English | Scopus | ID: covidwho-2265608

ABSTRACT

Combating fake news on social media is a critical challenge in today's digital age, especially when misinformation is spread regarding vital matters such as the Covid-19 pandemic. Manual verification of all content is infeasible; hence, Artificial Intelligence is used to classify fake news. Our ensemble model uses multiple Natural Language Processing techniques to analyze the truthfulness of the text in tweets. We create custom parameters that analyze the consistency and truthfulness of domains contained in hyperlinked URLs. We then combine these parameters with the results of our deep learning models to achieve classification with greater than 99% accuracy. We propose a novel method to calculate a custom coefficient, the Combined Metric of Prediction Uncertainty (CMPU), which measures how uncertain the model is of its classification of a given tweet. Using CMPU, we propose building a priority queue from which the tweets classified with the lowest certainty can be manually verified. By manually verifying 3.93% of tweets, we were able to improve the accuracy from 99.02% to 99.77%. © 2022 IEEE.
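The abstract does not give the CMPU formula, so the score below is only a placeholder (uncertainty peaking when the model's fake-probability is near 0.5); the priority-queue triage it feeds can be sketched with the stdlib heapq module:

```python
import heapq

def cmpu(prob_fake):
    """Placeholder uncertainty score (the paper's exact CMPU formula
    is not given in the abstract): 1.0 when the model's fake-probability
    is 0.5, falling to 0.0 when it is 0 or 1."""
    return 1.0 - abs(prob_fake - 0.5) * 2.0

def review_queue(tweets_with_probs):
    """Yield tweets in decreasing order of classification uncertainty.

    heapq is a min-heap, so we push negative CMPU scores; the least
    certain classifications pop first for manual verification.
    """
    heap = []
    for tweet, p in tweets_with_probs:
        heapq.heappush(heap, (-cmpu(p), tweet))
    while heap:
        neg_score, tweet = heapq.heappop(heap)
        yield tweet, -neg_score

# Hypothetical model outputs: (tweet text, predicted fake-probability).
preds = [("vaccine chip rumour", 0.97),
         ("mask study recap", 0.52),
         ("5g claim", 0.80)]
for tweet, uncertainty in review_queue(preds):
    print(f"{uncertainty:.2f}  {tweet}")
```

Manual reviewers would then work down this queue until their verification budget (3.93% of tweets in the paper) is exhausted, correcting exactly the labels the model was least sure about.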

19.
1st International Workshop on Measuring Ontologies for Value Enhancement, MOVE 2020 ; 1694 CCIS:57-72, 2022.
Article in English | Scopus | ID: covidwho-2261377

ABSTRACT

Fighting misinformation and computational propaganda requires integrated efforts from various domains like law and education, but there is also a need for computational tools. I investigate here how reasoning in Description Logics (DLs) can detect inconsistencies between trusted knowledge and untrusted sources. The proposed method is exemplified on fake news about the new coronavirus: in the context of the Covid-19 pandemic, many were quick to spread deceptive information. Since the untrusted information comes in natural language (e.g. "Covid-19 affects only the elderly"), the text is automatically converted into DLs using the FRED tool. The resulting knowledge graph, formalised in Description Logics, is merged with trusted ontologies on Covid-19. Reasoning in Description Logics is then performed with the Racer reasoner, which is responsible for detecting inconsistencies within the ontology. When an inconsistency is detected, a "red flag" is raised to signal possible fake news. The reasoner can provide justifications for the detected inconsistency. This availability of justifications is the main advantage over approaches based on machine learning, since the system is able to explain its reasoning steps to a human agent. Hence, the approach is a step towards human-centric AI systems. The main challenge remains to improve the technology which automatically translates text into a formal representation. © 2022, Springer Nature Switzerland AG.

20.
1st International Conference on Computational Science and Technology, ICCST 2022 ; : 821-824, 2022.
Article in English | Scopus | ID: covidwho-2260303

ABSTRACT

Since the adoption of the internet as a medium of communication, fake or false information or news has always been a major issue, and incidents of false information increase at times of crisis on national or international scales. The world witnessed a global pandemic from the coronavirus, causing a complete disruption in the functioning of society. News of bogus cures, home remedies, and medicines started to make its way around the world, and the number of such incidents only increased as the pandemic worsened and more people fell sick and died. In times of desperation, people can easily be persuaded to try unverified and possibly dangerous medicines or cures, which can cost them their money as well as their health. In this paper, natural language processing is used to first identify and differentiate text that contains information regarding Covid-19 from text that does not. Term frequency (TF) and inverse document frequency (IDF) scores are then calculated. The intent of the text is then analyzed by observing patterns characteristic of false news; with this analysis, the potential of the text to be false or fake is determined. This research intends to explore the linguistics of false news and to get one step ahead in identifying fake news. The same methodology can be used to analyze data related to other specific topics. © 2022 IEEE.
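The TF and IDF scores the paper computes can be illustrated in a few lines of stdlib Python (the toy corpus and whitespace tokenization below are invented for the example; the paper's preprocessing may differ):

```python
import math

def tf(term, doc_tokens):
    """Term frequency: share of the document's tokens that are `term`."""
    return doc_tokens.count(term) / len(doc_tokens)

def idf(term, corpus_tokens):
    """Inverse document frequency over the corpus (natural log)."""
    containing = sum(1 for doc in corpus_tokens if term in doc)
    return math.log(len(corpus_tokens) / containing) if containing else 0.0

def tf_idf(term, doc_tokens, corpus_tokens):
    """Weight of `term` in one document: frequent locally, rare globally."""
    return tf(term, doc_tokens) * idf(term, corpus_tokens)

corpus = [
    "miracle cure kills covid overnight".split(),
    "health agency issues covid guidance".split(),
    "miracle remedy goes viral".split(),
]
# "cure" appears in 1 of 3 documents, "covid" in 2 of 3: the rarer
# term gets the higher idf and therefore the higher tf-idf weight.
print(round(tf_idf("cure", corpus[0], corpus), 3))  # → 0.22
```

Terms with high tf-idf weight in a document are its most distinctive words, which is why these scores are a common first step before intent or style analysis of suspected false news.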
